IMPROVED SEPARATION METHOD AND COMPUTER PROGRAM PRODUCT
Patent abstract:
A method of separating, in a mixture signal w(t), a pure specific contribution x(t) and a background contribution z(t), using a mixture-signal model spectrogram V̂ corresponding to the sum of a spectrogram of a reverberated specific contribution V̂rev,y and a spectrogram of the background contribution V̂z, the spectrogram of the reverberated specific contribution depending on the spectrogram of the pure contribution V̂x according to the model V̂rev,y[f,t] = Σ_{τ=1}^{T} V̂x[f, t−τ+1] · R[f,τ], where R is a reverberation matrix, f is a frequency bin, t is a time frame, and τ is an integer between 1 and T; and minimizing a cost function (C) between the mixture-signal spectrogram and the mixture-signal model spectrogram.
Publication number: FR3031225A1
Application number: FR1463482
Application date: 2014-12-31
Publication date: 2016-07-01
Inventor: Romain Hennequin
Applicant: Audionamix
Main IPC class:
Patent description:
[0001] The present invention relates to methods for separating a plurality of contributions in an acoustic mixture signal, and in particular to separating a vocal contribution from a background musical contribution in an acoustic mixture signal. The soundtrack of a song comprises a vocal contribution (the lyrics sung by one or more singers) and a musical contribution (accompanying music played by one or more instruments). The soundtrack of a film comprises a vocal contribution (dialogue between actors) superimposed on a musical contribution (sound effects and/or background music). Separation algorithms are known that separate the vocal contribution from the musical contribution in an original soundtrack. For example, the article by Jean-Louis Durrieu et al., "An iterative approach to monaural musical mixture de-soloing," in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Taipei, Taiwan, April 2009, pp. 105-108, discloses an underdetermined source separation algorithm based on a non-negative matrix decomposition, which separates the vocal contribution from the background contribution. [0002] However, the known separation algorithms do not correctly take into account the reverberation affecting the components of the mixture. In the particular case of a vocal component, that component results from the superposition of the dry voice (called the pure voice in what follows), corresponding to the recording of the sound emitted by the singer that propagated directly to the recording microphone, and the reverberation, corresponding to the recording of the sound emitted by the singer that propagated indirectly to the recording microphone, that is to say by reflection, possibly multiple, off the walls of the recording room. The reverberation, consisting of the echoes of the pure voice at a given instant, spreads over a time interval that can be significant (for example three seconds).
In other words, at a given instant, the vocal contribution results from the superposition of the pure voice at that instant and the echoes of the pure voice at previous instants. The known separation algorithms do not take this long-term effect of reverberation into account. Ngoc Q. K. Duong, Emmanuel Vincent, and Rémi Gribonval, "Under-determined reverberant audio source separation using a full-rank spatial covariance model," IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 7, pp. 1830-1840, Sept. 2010, deals with the instantaneous effects of spatial reverberation, but does not model its memory effects, that is to say the latency between the recording of a sound and the recording of the echoes associated with that sound. Moreover, the type of algorithm proposed by that document applies only to multichannel signals and does not allow a correct extraction of the reverberation effects that can be found in music. In the case of a vocal component, the reverberation affecting this component is then distributed among the different components obtained after separation: the separated vocal component loses its richness and the accompanying music component is of poor quality. It should be noted that reverberation may be caused by the recording conditions, but may also be added artificially during post-production of the soundtrack, essentially for aesthetic reasons. There is therefore a need for a method for separating contributions in a mixture when these contributions include a reverberation of the corresponding pure sound signal, and more particularly a need to separate, in a sound signal, a pure vocal contribution affected by reverberation from a background musical contribution. The invention aims to address this problem.
The subject of the invention is therefore a method of separating, in an acoustic mixture signal, a pure specific contribution affected by reverberation and a background contribution, characterized in that it consists in separating the pure specific contribution x(t) and the background contribution z(t) by using a mixture-signal model spectrogram V̂ corresponding to the sum of a spectrogram of a reverberated specific contribution V̂rev,y and a spectrogram of the background contribution V̂z, the spectrogram of the reverberated specific contribution depending on the spectrogram of the pure contribution V̂x according to the model:

V̂rev,y[f,t] = Σ_{τ=1}^{T} V̂x[f, t−τ+1] · R[f,τ]

where R is a reverberation matrix, f is a frequency bin, t is a time frame, and τ is an integer between 1 and T; and in minimizing a cost function between the spectrogram of the mixture signal and the mixture-signal model spectrogram. [0003] According to other embodiments, the separation method comprises one or more of the following features, taken in isolation or in any technically possible combination: the cost function uses a divergence between the spectrogram of the mixture signal and the mixture-signal model spectrogram, in particular the Itakura-Saito divergence; the specific contribution being a vocal contribution, the spectrogram of the pure contribution V̂x is modeled by:

V̂x = (W_F0 H_F0) ⊙ (W_K H_K)

where W_F0 is a matrix of harmonic atoms, H_F0 is an activation matrix of the harmonic atoms of the matrix W_F0, W_K is a matrix of filter atoms, H_K is an activation matrix of the filter atoms of the matrix W_K, and ⊙ is an operator corresponding to the term-by-term product between matrices.
the minimization of the cost function implements multiplicative update rules of the type:

H_F0 ← H_F0 ⊙ [W_F0^T ((W_K H_K) ⊙ (R ⋆_t (V ⊙ V̂rev^(⊙−2))))] / [W_F0^T ((W_K H_K) ⊙ (R ⋆_t V̂rev^(⊙−1)))]
H_K ← H_K ⊙ [W_K^T ((W_F0 H_F0) ⊙ (R ⋆_t (V ⊙ V̂rev^(⊙−2))))] / [W_K^T ((W_F0 H_F0) ⊙ (R ⋆_t V̂rev^(⊙−1)))]
W_K ← W_K ⊙ [((W_F0 H_F0) ⊙ (R ⋆_t (V ⊙ V̂rev^(⊙−2)))) H_K^T] / [((W_F0 H_F0) ⊙ (R ⋆_t V̂rev^(⊙−1))) H_K^T]

where ⊙ is an operator corresponding to the term-by-term product between matrices (or vectors); .^(⊙a) is an operator corresponding to the term-by-term exponentiation of a matrix by a scalar; (.)^T is the transpose of a matrix; the fraction bar denotes term-by-term division; R ⋆_t (·) denotes the row-by-row (time) correlation with R; and 1_T is a T × 1 vector whose elements are all equal to 1. the separation of the pure specific contribution x(t) and the background contribution z(t) using a model spectrogram of the acoustic mixture signal V̂ constituting a second part of the method, the method comprises a first part consisting in separating, in the acoustic mixture signal, a specific contribution and the background contribution, without taking the reverberation into account, the spectrogram of the specific contribution being used as the initial value of the spectrogram of the reverberated specific contribution during the minimization of the cost function in the second part of the method; the first part comprises the minimization of a cost function similar to that of the second part; for the minimization of the cost function, the first part implements multiplicative update rules of the type:

H_F0 ← H_F0 ⊙ [W_F0^T ((W_K H_K) ⊙ (V ⊙ V̂^(⊙−2)))] / [W_F0^T ((W_K H_K) ⊙ V̂^(⊙−1))]
H_K ← H_K ⊙ [W_K^T ((W_F0 H_F0) ⊙ (V ⊙ V̂^(⊙−2)))] / [W_K^T ((W_F0 H_F0) ⊙ V̂^(⊙−1))]
W_K ← W_K ⊙ [((W_F0 H_F0) ⊙ (V ⊙ V̂^(⊙−2))) H_K^T] / [((W_F0 H_F0) ⊙ V̂^(⊙−1)) H_K^T]

the method comprises, in the first part, following the minimization of the cost function, the application of an algorithm tracking the power maximum in the spectrogram of the specific contribution, said algorithm preferably being of the Viterbi algorithm type, followed by the zeroing of all spectrogram terms that are too far from the power maximum found.
the spectrogram of the background contribution V̂z is modeled by a factorization into non-negative matrices:

V̂z = W_R H_R

where W_R is a matrix of elementary spectral models and H_R is an activation matrix of the elementary models of the matrix W_R, and the minimization of the cost function implements multiplicative update rules of the type:

H_R ← H_R ⊙ [W_R^T (V ⊙ V̂^(⊙−2))] / [W_R^T V̂^(⊙−1)]
W_R ← W_R ⊙ [(V ⊙ V̂^(⊙−2)) H_R^T] / [V̂^(⊙−1) H_R^T]

where ⊙ is an operator corresponding to the term-by-term product between matrices (or vectors); .^(⊙a) is an operator corresponding to the term-by-term exponentiation of a matrix by a scalar; (.)^T is the transpose of a matrix; and 1_T is a T × 1 vector whose elements are all equal to 1. [0004] The invention also relates to a computer program product for carrying out the above method. [0005] The invention will be better understood on reading the following description of a particular embodiment, given solely by way of illustrative and non-limiting example, and with reference to the appended drawings, in which: Figure 1 is a block representation of the different steps of the separation method according to the invention, and Figures 2 and 3 are graphs resulting from tests making it possible to compare, according to known standard criteria, the results of several separation methods. Referring to Figure 1, the separation method 100 takes a temporal acoustic mixture signal w(t) and delivers a vocal acoustic signal y(t) and a musical acoustic signal z(t). The signals are all acoustic signals, so the qualifier "acoustic" will be omitted in what follows. These signals are time signals: they depend on time t. [0006] The acoustic mixture signal is a source soundtrack, or at least an extract from a soundtrack. The acoustic mixture signal w(t) comprises a first, so-called specific, contribution and a second, so-called accompaniment, contribution.
In the present description, the first contribution is a vocal contribution and corresponds to the words sung by a singer; the second contribution is a musical contribution and corresponds to the musical accompaniment of the singer. The vocal acoustic signal y(t) corresponds to the vocal contribution alone, isolated from the rest of the mixture signal w(t), and the musical acoustic signal z(t) corresponds to the musical contribution alone, isolated from the rest of the mixture signal w(t). In the present embodiment, it is considered that only the vocal contribution is reverberated. The reverberation is modeled in the following way:

y(t) = r(t) * x(t)

where x(t) is the pure vocal signal, i.e. the sound signal generated by the singer that propagated directly to the recording microphone; r(t) is an impulse response, i.e. a distribution giving the amplitude of the echoes for each arrival time of the corresponding echo at the recording microphone; and * is the convolution product. The pure vocal signal x(t) is the free-field signal, and the impulse response r(t) is characteristic of the acoustic environment of the recording. [0007] In the time-frequency domain, for non-negative spectrograms, this reverberation model can be approximated, as proposed in Rita Singh, Bhiksha Raj, and Paris Smaragdis, "Latent-variable decomposition based dereverberation of monaural and multi-channel signals," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Dallas, Texas, USA, March 2010, by:

V^{rev,y}[f,t] = Σ_{τ=1}^{T} V^x[f, t−τ+1] · R[f,τ]

where V^{rev,y} is the spectrogram of the signal y(t), considered to be affected by reverberation, V^x is the spectrogram of the signal x(t), R is a reverberation matrix corresponding to the spectrogram of the impulse response r(t), and T is the temporal dimension of R.
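This reverberation model can be sketched numerically. The following is an illustrative sketch (not the patented implementation), assuming NumPy and randomly generated stand-in signals; the signal lengths and echo positions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in dry (pure) signal x(t) and a short impulse response r(t):
# a direct path followed by two decaying echoes.
x = rng.standard_normal(2048)
r = np.zeros(256)
r[0], r[100], r[200] = 1.0, 0.5, 0.25

# Time-domain reverberation model: y(t) = r(t) * x(t) (convolution product).
y = np.convolve(x, r)[: len(x)]

# Non-negative spectrogram approximation of the same model:
# Vrev_y[f, t] = sum over tau of Vx[f, t - tau] * R[f, tau] (0-indexed),
# i.e. each frequency row of Vx is convolved with the matching row of R.
F, U, T = 4, 50, 6  # illustrative sizes
Vx = np.abs(rng.standard_normal((F, U)))
R = np.abs(rng.standard_normal((F, T)))
Vrev_y = np.zeros((F, U))
for f in range(F):
    Vrev_y[f] = np.convolve(Vx[f], R[f])[:U]
```

The row-wise convolution keeps only the first U frames, so each frame of the reverberated spectrogram depends on the current and the T−1 previous frames of the dry spectrogram, as in the model above.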
[0008] The first step 110 of the method 100 consists in sampling the mixture signal w(t) and computing a spectrogram V of the mixture signal w(t). This spectrogram is defined as the absolute value (or the square of the absolute value) of the short-term Fourier transform of the sampled signal w(t). For each time step, the spectrogram comprises a frequency frame indicating, for each frequency bin, the instantaneous power of the signal. The spectrogram V is thus an F × U matrix of non-negative real numbers, where U is the total number of frames subdividing the duration of the mixture signal w(t) and F is the total number of frequency bins, generally between 200 and 2000. The method 100 then comprises a first part in which the vocal signal is considered a pure vocal signal without reverberation. In this first part, the mixture-signal model spectrogram is the sum of the spectrogram of the vocal signal V̂y and the spectrogram of the musical signal V̂z, where V̂y is the spectrogram of the signal y(t), considered unaffected by reverberation. This is ultimately the usual modeling in the context of decomposition methods by non-negative matrix factorization. It should be noted that â denotes a quantity that is an estimate of the quantity a. Thus, in the steps of the first part of the method 100, the two output spectrograms are estimated so that their sum best approximates the spectrogram of the mixture:

V̂ = V̂y + V̂z

The modeling of the vocal signal is based on a source/filter voice production model, as proposed in Jean-Louis Durrieu et al., "An iterative approach to monaural musical mixture de-soloing," in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Taipei, Taiwan, April 2009, pp.
105-108:

V̂y = (W_F0 H_F0) ⊙ (W_K H_K)

The first term of this modeling is the source of the voice, which corresponds to the excitation of the vocal cords: W_F0 is a matrix of harmonic atoms, which is predefined and specific to the singer; H_F0 is an activation matrix indicating, at each instant, the harmonic atoms of the matrix W_F0 that are activated. [0009] The second term of this modeling is the filter of the voice, which corresponds to the filtering performed by the vocal tract: W_K is a matrix of filter atoms; H_K is an activation matrix indicating, at each instant, the filter atoms of the matrix W_K that are activated. The operator ⊙ corresponds to the term-by-term multiplication of two matrices (also called the Hadamard product). The modeling of the musical signal is based on a generic non-negative matrix factorization model:

V̂z = W_R H_R

The columns of W_R can be seen as elementary spectral models, and H_R as an activation matrix of these elementary models over time. The first part of the method then consists in estimating the matrices H_F0, W_K, H_K, W_R and H_R. In order to estimate the parameters of these matrices, a cost function C, based on an element-wise divergence d, is used:

C = D(V | V̂y + V̂z) = Σ_{f,t} d(V[f,t] | V̂y[f,t] + V̂z[f,t])

In the presently contemplated embodiment, the Itakura-Saito divergence, well known to those skilled in the art, is used. It is written:

d(a|b) = a/b − log(a/b) − 1

In step 120, the cost function C is thus minimized so as to determine the optimum value of each parameter of each matrix. This minimization is performed by iterations, with multiplicative update rules that are successively applied to each of the parameters of the matrices H_F0, W_K, H_K, W_R and H_R. [0010] These update rules are, for example, derived by considering the gradient (that is to say, the partial derivatives) of the cost function C with respect to each parameter.
More precisely, the gradient of the cost function with respect to the parameter under consideration is written as the difference of two positive terms, and the corresponding update rule is a multiplication of the parameter by the ratio of these two terms. This ensures in particular that the parameters remain non-negative at each update and become constant when the gradient of the cost function with respect to the parameter tends to zero. [0011] In this way, the parameters evolve towards a local minimum. The update rules are as follows:

H_F0 ← H_F0 ⊙ [W_F0^T ((W_K H_K) ⊙ (V ⊙ V̂^(⊙−2)))] / [W_F0^T ((W_K H_K) ⊙ V̂^(⊙−1))]
H_K ← H_K ⊙ [W_K^T ((W_F0 H_F0) ⊙ (V ⊙ V̂^(⊙−2)))] / [W_K^T ((W_F0 H_F0) ⊙ V̂^(⊙−1))]
W_K ← W_K ⊙ [((W_F0 H_F0) ⊙ (V ⊙ V̂^(⊙−2))) H_K^T] / [((W_F0 H_F0) ⊙ V̂^(⊙−1)) H_K^T]
H_R ← H_R ⊙ [W_R^T (V ⊙ V̂^(⊙−2))] / [W_R^T V̂^(⊙−1)]
W_R ← W_R ⊙ [(V ⊙ V̂^(⊙−2)) H_R^T] / [V̂^(⊙−1) H_R^T]

where ⊙ is an operator corresponding to the term-by-term product between matrices (or vectors); .^(⊙a) is an operator corresponding to the term-by-term exponentiation of a matrix by a scalar; (.)^T is the transpose of a matrix; and the fraction bar denotes term-by-term division. [0012] For this first part, all the parameters are initialized with non-negative values chosen randomly. Then, in step 130, the matrix H_F0 is constrained by a tracking algorithm, such as the Viterbi tracking algorithm, in order to select, for each time step, the frequency bin containing a power maximum while not being too far in frequency from the power maxima selected for the previous time steps. Then, in step 140, the coefficients of the matrix H_F0 that are at a frequency distance greater than a reference distance from the tracked maxima are set to 0. A matrix H̃_F0 is obtained.
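The first part described above can be sketched as follows. This is a minimal NumPy sketch with random stand-in data and illustrative matrix sizes, not the patented implementation; the tracking steps 130-140 are omitted, and the small constant `eps` is an assumption added for numerical safety:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 1e-12  # guard against division by zero (illustrative, not from the patent)

# Illustrative sizes: F frequency bins, U time frames, and numbers of
# harmonic atoms, filter atoms and background spectral models.
F, U = 30, 40
nF0, nK, nR = 8, 4, 5

V = np.abs(rng.standard_normal((F, U))) + eps   # mixture spectrogram (stand-in)
W_F0 = np.abs(rng.standard_normal((F, nF0)))    # harmonic atoms, kept fixed
H_F0 = np.abs(rng.standard_normal((nF0, U)))
W_K = np.abs(rng.standard_normal((F, nK)))      # filter atoms
H_K = np.abs(rng.standard_normal((nK, U)))
W_R = np.abs(rng.standard_normal((F, nR)))      # elementary spectral models
H_R = np.abs(rng.standard_normal((nR, U)))

def model():
    # V-hat = (W_F0 H_F0) . (W_K H_K) + W_R H_R  (element-wise product)
    return (W_F0 @ H_F0) * (W_K @ H_K) + W_R @ H_R

def itakura_saito(V, Vh):
    # d(a|b) = a/b - log(a/b) - 1, summed over all time-frequency bins
    Q = V / Vh
    return float(np.sum(Q - np.log(Q) - 1.0))

cost_before = itakura_saito(V, model())
for _ in range(50):
    # Multiplicative updates: each parameter is multiplied by the ratio of the
    # negative and positive parts of the gradient, so it stays non-negative.
    Vh = model(); S = W_K @ H_K
    H_F0 *= (W_F0.T @ (S * V * Vh**-2)) / (W_F0.T @ (S * Vh**-1) + eps)
    Vh = model(); S = W_F0 @ H_F0
    H_K *= (W_K.T @ (S * V * Vh**-2)) / (W_K.T @ (S * Vh**-1) + eps)
    Vh = model()
    W_K *= ((S * V * Vh**-2) @ H_K.T) / ((S * Vh**-1) @ H_K.T + eps)
    Vh = model()
    H_R *= (W_R.T @ (V * Vh**-2)) / (W_R.T @ Vh**-1 + eps)
    Vh = model()
    W_R *= ((V * Vh**-2) @ H_R.T) / (Vh**-1 @ H_R.T + eps)
cost_after = itakura_saito(V, model())
```

On random data the Itakura-Saito cost drops over the iterations, illustrating the convergence towards a local minimum described above.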
In the second part of the method 100, the vocal signal is considered to be affected by reverberation. The model of the reverberated vocal spectrogram V̂rev,y as a function of the pure vocal spectrogram V̂x is then written:

V̂rev,y[f,t] = [V̂x ⋆_t R][f,t] = Σ_{τ=1}^{T} V̂x[f, t−τ+1] · R[f,τ]

where ⋆_t denotes a row-by-row convolution operator, as made explicit in the right-hand side of the above equation. R has T time steps (each of the same duration as a sampling step of the mixture signal) and F frequency bins; T is predetermined by the user and is generally between 20 and 200, for example 100. [0013] In addition, as above, the spectrogram V̂x of the pure signal is modeled by:

V̂x = (W_F0 H_F0) ⊙ (W_K H_K)

The second part of the method then consists in estimating the matrices H_F0, W_K, H_K, W_R, H_R and R that allow the spectrogram of the mixture V to be approximated:

V̂rev = V̂rev,y + V̂z

In order to estimate the parameters of these matrices, a cost function C, based on an element-wise divergence d, is used:

C = D(V | V̂rev)

In the presently contemplated embodiment, the Itakura-Saito divergence, well known to those skilled in the art, is used. It is written:

d(a|b) = a/b − log(a/b) − 1

Advantageously, the cost function of the second part is similar to that used in the first part. In step 220, the cost function C is then minimized so as to determine the optimum value of each parameter of each matrix, in particular the parameters of the reverberation matrix. This minimization is performed by iterations with multiplicative update rules, which are successively applied to each of the parameters of the matrices.
For the matrices of the voice component including reverberation, we have:

H_F0 ← H_F0 ⊙ [W_F0^T ((W_K H_K) ⊙ (R ⋆_t (V ⊙ V̂rev^(⊙−2))))] / [W_F0^T ((W_K H_K) ⊙ (R ⋆_t V̂rev^(⊙−1)))]
H_K ← H_K ⊙ [W_K^T ((W_F0 H_F0) ⊙ (R ⋆_t (V ⊙ V̂rev^(⊙−2))))] / [W_K^T ((W_F0 H_F0) ⊙ (R ⋆_t V̂rev^(⊙−1)))]
W_K ← W_K ⊙ [((W_F0 H_F0) ⊙ (R ⋆_t (V ⊙ V̂rev^(⊙−2)))) H_K^T] / [((W_F0 H_F0) ⊙ (R ⋆_t V̂rev^(⊙−1))) H_K^T]

where R ⋆_t (·) denotes the row-by-row correlation with R associated with the convolution operator defined above. The same gradient-based recipe applies to the reverberation matrix itself:

R[f,τ] ← R[f,τ] · [Σ_t V̂x[f, t−τ+1] · (V ⊙ V̂rev^(⊙−2))[f,t]] / [Σ_t V̂x[f, t−τ+1] · V̂rev^(⊙−1)[f,t]]

For the musical component, we have, as in the first part of the method:

H_R ← H_R ⊙ [W_R^T (V ⊙ V̂rev^(⊙−2))] / [W_R^T V̂rev^(⊙−1)]
W_R ← W_R ⊙ [(V ⊙ V̂rev^(⊙−2)) H_R^T] / [V̂rev^(⊙−1) H_R^T]

As regards the H_F0 matrix, the iterations start from the matrix H̃_F0 determined in the first part of the method. It should be noted that, since the update rules are multiplicative, the H_F0 coefficients initially set to 0 remain at 0 during the minimization of the cost function in the second part of the method. When the distance between the mixture spectrogram V and the estimated spectrogram V̂rev,y + V̂z is less than a predetermined threshold, or when a predetermined limit number of iterations is reached, the method leaves the iteration loop, and the values of the matrices R, H_F0, W_K, H_K, W_R and H_R are the final values. In step 230, conventional suitable treatments (in particular a Wiener-filtering-type treatment) are applied to the previous spectrograms to obtain the spectrograms of interest V̂x and V̂z. Then, in step 240, a transformation inverse to that of step 110 is applied to these spectrograms to obtain the output signals: the pure vocal signal x(t) and the musical signal z(t). In the embodiments described here in detail, these acoustic signals are monophonic; alternatively, they are stereophonic or, more generally, multichannel, in which case the treatments presented for the monophonic case must be adapted accordingly. The preferred embodiment relates to a specific component, or component of interest, that is a voice component.
However, the modeling of the reverberation of a component is general and applies to any type of component. In particular, the background sound component may also be affected by reverberation. In addition, any type of non-negative model of non-reverberated sound spectrograms may be used in place of those used above. Moreover, in the embodiment presented above, the mixture comprises two components; generalization to any number of components is straightforward. Comparative tests have been carried out in order to compare the results of three implementations: the first method is a separation based on an NMF-type method, without modeling of the reverberation; the second method is a separation according to the method described above, that is to say including a modeling of the reverberation of the voice signal; and the third method is a theoretical mathematical bound. In order to quantify the results obtained for the various methods, standard indicators of the field of source separation have been calculated. These indicators are the signal-to-distortion ratio (SDR), which corresponds to an overall quantitative measure; the signal-to-artifact ratio (SAR), which corresponds to the artifacts in the separated components; and the signal-to-interference ratio (SIR), which corresponds to the residual interference between the separated components. The results are shown in Figure 2 for the vocal signal and in Figure 3 for the musical signal. The method according to the invention therefore improves the results obtained, whatever the way of analyzing them.
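To make the second part concrete, the following sketch (assuming NumPy, with random stand-in spectrograms) estimates the reverberation matrix R by multiplicative Itakura-Saito updates and then applies a Wiener-type soft mask as in step 230. For brevity the pure-voice and background spectrograms are held fixed here, whereas the method also updates them, and `eps` is an illustrative numerical guard:

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 1e-12

# Illustrative sizes and stand-in spectrograms (not from the patent).
F, U, T = 20, 60, 8
V = np.abs(rng.standard_normal((F, U))) + eps   # mixture spectrogram
Vx = np.abs(rng.standard_normal((F, U))) + eps  # pure-voice spectrogram (fixed here)
Vz = np.abs(rng.standard_normal((F, U))) + eps  # background spectrogram (fixed here)
R = np.abs(rng.standard_normal((F, T)))         # reverberation matrix to estimate

def row_conv(Vx, R):
    # [Vx *t R][f, t] = sum over tau of Vx[f, t - tau] * R[f, tau] (0-indexed)
    out = np.zeros_like(Vx)
    for f in range(Vx.shape[0]):
        out[f] = np.convolve(Vx[f], R[f])[: Vx.shape[1]]
    return out

def row_corr(Vx, G):
    # Adjoint of row_conv with respect to R: row-wise correlation of Vx with G.
    F_, U_ = Vx.shape
    out = np.zeros((F_, T))
    for f in range(F_):
        for tau in range(T):
            out[f, tau] = np.dot(Vx[f, : U_ - tau], G[f, tau:])
    return out

def cost():
    # Itakura-Saito divergence between V and the reverberated model.
    Q = V / (row_conv(Vx, R) + Vz)
    return float(np.sum(Q - np.log(Q) - 1.0))

c0 = cost()
for _ in range(30):
    Vrev = row_conv(Vx, R) + Vz
    # Multiplicative Itakura-Saito update of the reverberation matrix R.
    R *= row_corr(Vx, V * Vrev**-2) / (row_corr(Vx, Vrev**-1) + eps)
c1 = cost()

# Wiener-type filtering (step 230): soft mask recovering the reverberated voice.
Vrev_y = row_conv(Vx, R)
mask = Vrev_y / (Vrev_y + Vz)
voice_estimate = mask * V
```

Since the updates are multiplicative ratios of non-negative terms, R stays non-negative throughout, and the mask values remain between 0 and 1.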
Claims:
Claims (8) [0001] CLAIMS 1. A separation method (100), in an acoustic mixture signal (w(t)), of a pure specific contribution affected by reverberation and of a background contribution, characterized in that it consists in separating the pure specific contribution x(t) and the background contribution z(t) by using a model spectrogram of the acoustic mixture signal V̂ corresponding to the sum of a spectrogram of a reverberated specific contribution V̂rev,y and a spectrogram of the background contribution V̂z, the spectrogram of the reverberated specific contribution depending on the spectrogram of the pure specific contribution V̂x according to the model:

V̂rev,y[f,t] = Σ_{τ=1}^{T} V̂x[f, t−τ+1] · R[f,τ]

where R is a reverberation matrix, f is a frequency bin, t is a time frame, and τ an integer between 1 and T, and in minimizing a cost function (C) between a spectrogram of the mixture signal and the model spectrogram of the mixture signal. [0002] 2. The method according to claim 1, characterized in that the cost function (C) uses a divergence (d) between the spectrogram of the mixture signal and the model spectrogram of the mixture signal, in particular the Itakura-Saito divergence. [0003] 3. The method according to any one of the preceding claims, characterized in that, the pure specific contribution being a vocal contribution, the spectrogram of the pure specific contribution V̂x is modeled by:

V̂x = (W_F0 H_F0) ⊙ (W_K H_K)

where W_F0 is a matrix of harmonic atoms, H_F0 is an activation matrix of the harmonic atoms of the matrix W_F0, W_K is a matrix of filter atoms, H_K is an activation matrix of the filter atoms of the matrix W_K, and ⊙ is an operator corresponding to the term-by-term product between matrices. [0004] 4.
The method according to any one of the preceding claims, characterized in that, the separation of the pure specific contribution x(t) and the background contribution z(t) by using a model spectrogram of the acoustic mixture signal V̂ constituting a second part of the method, the method comprises a first part consisting in separating, in the acoustic mixture signal (w(t)), a specific contribution and the background contribution, without taking the reverberation into account, the spectrogram of the specific contribution being used as the initial value of the spectrogram of the reverberated specific contribution during the minimization of the cost function in the second part of the method. [0005] 5. The method according to claim 4, characterized in that the first part comprises the minimization of a cost function similar to that of the second part. [0006] 6. The method according to claim 5, characterized in that it comprises, in the first part of the method, following the minimization of the cost function, the application of an algorithm tracking the power maximum in the spectrogram of the specific contribution, said algorithm preferably being of the Viterbi algorithm type, followed by the zeroing of all spectrogram terms that are too far from the power maximum found. [0007] 7. The method according to any one of the preceding claims, characterized in that the spectrogram of the background contribution V̂z is modeled by a factorization into non-negative matrices:

V̂z = W_R H_R

where W_R is a matrix of elementary spectral models and H_R is an activation matrix of the elementary models of the matrix W_R. [0008] 8. A computer program product, characterized in that it comprises instructions adapted to be stored in the memory of a computer and executed by the processor of said computer to implement a separation method according to any one of the preceding claims.
Similar technologies:
Publication number / authors | Publication date | Title
- EP3040989B1 | 2018-10-17 | Improved method of separation and computer program product
- KR20130108391A | 2013-10-02 | Method, apparatus and machine-readable storage medium for decomposing a multichannel audio signal
- JP2009128906A | 2009-06-11 | Method and system for denoising mixed signal including sound signal and noise signal
- WO2005106852A1 | 2005-11-10 | Improved voice signal conversion method and system
- WO2015196729A1 | 2015-12-30 | Microphone array speech enhancement method and device
- Wisdom et al. | 2015 | Enhancement and recognition of reverberant and noisy speech by extending its coherence
- Dumortier et al. | 2014 | Blind RT60 estimation robust across room sizes and source distances
- Fitzgerald et al. | 2016 | PROJET—Spatial audio separation using projections
- Kilgour et al. | 2018 | Fréchet Audio Distance: A Metric for Evaluating Music Enhancement Algorithms
- JP2016143042A | 2016-08-08 | Noise removal system and noise removal program
- FR3013885A1 | 2015-05-29 | METHOD AND SYSTEM FOR SEPARATING SPECIFIC CONTRIBUTIONS AND SOUND BACKGROUND IN ACOUSTIC MIXING SIGNAL
- US10614827B1 | 2020-04-07 | System and method for speech enhancement using dynamic noise profile estimation
- JP3849679B2 | 2006-11-22 | Noise removal method, noise removal apparatus, and program
- JP2006227328A | 2006-08-31 | Sound processor
- JP2016500847A | 2016-01-14 | Digital processor based complex acoustic resonance digital speech analysis system
- Padaki et al. | 2013 | Single channel speech dereverberation using the LP residual cepstrum
- Liu et al. | 2016 | Speech enhancement of instantaneous amplitude and phase for applications in noisy reverberant environments
- Löllmann et al. | 2019 | Comparative study of single-channel algorithms for blind reverberation time estimation
- Adiloğlu et al. | 2012 | A general variational Bayesian framework for robust feature extraction in multisource recordings
- Chen et al. | 2021 | A dual-stream deep attractor network with multi-domain learning for speech dereverberation and separation
- CN109644304B | 2021-07-13 | Source separation for reverberant environments
- EP2452293A1 | 2012-05-16 | Source location
- EP2901447B1 | 2016-12-21 | Method and device for separating signals by minimum variance spatial filtering under linear constraint
- Wager et al. | 2018 | Collaborative Speech Dereverberation: Regularized Tensor Factorization for Crowdsourced Multi-Channel Recordings
- JP4313740B2 | 2009-08-12 | Reverberation removal method, program, and recording medium
Patent family:
Publication number | Publication date
- EP3040989B1 | 2018-10-17
- EP3040989A1 | 2016-07-06
- US20160189731A1 | 2016-06-30
- US9711165B2 | 2017-07-18
- FR3031225B1 | 2018-02-02
Citations:
Publication number | Application date | Publication date | Applicant | Title
- JP5195652B2 | 2008-06-11 | 2013-05-08 | Sony Corporation | Signal processing apparatus, signal processing method, and program
- US20130282373A1 | 2012-04-23 | 2013-10-24 | Qualcomm Incorporated | Systems and methods for audio signal processing
- US9549253B2 | 2012-09-26 | 2017-01-17 | Foundation for Research and Technology—Hellas Institute of Computer Science | Sound source localization and isolation apparatuses, methods and systems
- FR3013885B1 | 2013-11-28 | 2017-03-24 | Audionamix | METHOD AND SYSTEM FOR SEPARATING SPECIFIC CONTRIBUTIONS AND SOUND BACKGROUND IN ACOUSTIC MIXING SIGNAL
- EP3507993B1 | 2016-08-31 | 2020-11-25 | Dolby Laboratories Licensing Corporation | Source separation for reverberant environment
- EP3324407A1 | 2016-11-17 | 2018-05-23 | Fraunhofer Gesellschaft zur Förderung der Angewand | Apparatus and method for decomposing an audio signal using a ratio as a separation characteristic
- EP3324406A1 | 2016-11-17 | 2018-05-23 | Fraunhofer Gesellschaft zur Förderung der Angewand | Apparatus and method for decomposing an audio signal using a variable threshold
Legal events:
- 2015-10-15 | PLFP | Fee payment | Year of fee payment: 2
- 2016-07-01 | PLSC | Publication of the preliminary search report | Effective date: 20160701
- 2016-11-18 | PLFP | Fee payment | Year of fee payment: 3
- 2017-09-29 | PLFP | Fee payment | Year of fee payment: 4
- 2018-09-19 | PLFP | Fee payment | Year of fee payment: 5
- 2019-12-03 | PLFP | Fee payment | Year of fee payment: 6
- 2020-12-07 | PLFP | Fee payment | Year of fee payment: 7
- 2021-11-17 | PLFP | Fee payment | Year of fee payment: 8
Priority and related applications:
Application number | Application date | Title
- FR1463482A | 2014-12-31 | IMPROVED SEPARATION METHOD AND COMPUTER PROGRAM PRODUCT (FR3031225B1)
- EP15198713.8A | 2015-12-09 | Improved method of separation and computer program product (EP3040989B1)
- US 14/984,089 | 2015-12-30 | Process and associated system for separating a specified audio component affected by reverberation and an audio background component from an audio mixture signal
Sulfonates, polymers, resist compositions and patterning process
Washing machine
Washing machine
Device for fixture finishing and tension adjusting of membrane
Structure for Equipping Band in a Plane Cathode Ray Tube
Process for preparation of 7 alpha-carboxyl 9, 11-epoxy steroids and intermediates useful therein an
国家/地区
|